11 research outputs found

    Statistical and machine learning for credit and market risk management

    Get PDF
    Financial institutions play an important role in the stability of the financial sector. They act as key intermediaries in the provision of money and credit and in the transfer of risk between firms. This intermediary function, however, exposes financial institutions to various types of risk. Identifying and measuring these risks is particularly important in difficult times, when a distressed financial sector can lead to a contraction in lending. Especially during economic downturns, the role of providing liquidity and credit matters more than ever. Accurately estimating the determinants of the various sources of risk is therefore a highly important task for the economy in general and for financial institutions in particular. Over the past decades, computing power and storage capacity have grown considerably while costs have fallen sharply, allowing researchers and practitioners to use more advanced and computationally intensive models. This is particularly relevant for machine learning models, but also for Bayesian models. This thesis examines the application of advanced statistical and machine learning methods to credit and market risk management. These applications are treated in four independent research papers. The first uses advanced Bayesian methods to study the challenging risk parameter exposure at default (EAD) and its behaviour in downturn periods. The second paper focuses on combining statistical and machine learning methods to investigate different aspects of loss given default (LGD), with particular emphasis on methods for explaining machine learning models. The third research paper applies neural networks to the calibration of financial models, with a particular focus on their usefulness in practice. The final research paper examines in depth the non-linearity associated with stock market movements.

    Credit line exposure at default modelling using Bayesian mixed effect quantile regression

    Get PDF
    For banks, credit lines play an important role, exposing them to both liquidity and credit risk. Under the advanced internal ratings-based approach, banks are obliged to use their own estimates of exposure at default based on credit conversion factors. For volatile segments, additional downturn estimates are required. Using the world's largest database of defaulted credit lines from the US and Europe together with macroeconomic variables, we apply a Bayesian mixed effect quantile regression and find strongly varying covariate effects over the whole conditional distribution of credit conversion factors, and especially between the United States and Europe. If macroeconomic variables do not provide adequate downturn estimates, the model is enhanced by random effects. Results from European credit lines suggest that high conversion factors are driven by random effects rather than observable covariates. We further show that the impact of the economic environment depends strongly on the level of utilization one year prior to default, suggesting that credit lines with high drawdown potential are most affected by economic downturns and hence bear the highest risk in crisis periods.
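    The model class behind these results can be sketched compactly. Below is a minimal, illustrative Bayesian quantile regression with a group-level random intercept, written in Python with PyMC; the asymmetric Laplace likelihood is the standard device for Bayesian quantile regression. The toy data, the covariate names and the 75% quantile are assumptions for illustration, not the paper's actual specification.

        # Minimal sketch: Bayesian mixed-effect quantile regression for
        # credit conversion factors (CCF). All data below are placeholders.
        import numpy as np
        import pymc as pm

        rng = np.random.default_rng(0)
        n, k, n_groups = 500, 3, 2                 # toy dimensions
        X = rng.normal(size=(n, k))                # e.g. utilization, GDP growth, ...
        group = rng.integers(0, n_groups, size=n)  # e.g. US vs. Europe
        ccf = rng.beta(2, 5, size=n)               # placeholder response

        tau = 0.75                                 # conditional quantile of interest
        kappa = np.sqrt(tau / (1 - tau))           # ALD asymmetry so that P(y <= mu) = tau

        with pm.Model():
            beta0 = pm.Normal("beta0", 0.0, 1.0)
            beta = pm.Normal("beta", 0.0, 1.0, shape=k)
            # random intercept per group, picking up downturn effects that the
            # observable macro covariates do not capture
            sigma_u = pm.HalfNormal("sigma_u", 1.0)
            u = pm.Normal("u", 0.0, sigma_u, shape=n_groups)
            mu = beta0 + pm.math.dot(X, beta) + u[group]
            b = pm.HalfNormal("b", 1.0)            # ALD scale
            pm.AsymmetricLaplace("obs", kappa=kappa, mu=mu, b=b, observed=ccf)
            idata = pm.sample(1000, tune=1000, target_accept=0.9)

    Posterior draws of the group intercepts u then show how much of the behaviour in high quantiles is attributable to unobserved, group-specific effects rather than to the macro covariates, which mirrors the argument for adding random effects when macroeconomic variables alone do not deliver adequate downturn estimates.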

    Introducing an Interpretable Deep Learning Approach to Domain-Specific Dictionary Creation: A Use Case for Conflict Prediction

    Get PDF
    Recent advancements in natural language processing (NLP) methods have significantly improved their performance. However, more complex NLP models are more difficult to interpret and computationally expensive. Therefore, we propose an approach to dictionary creation that carefully balances the trade-off between complexity and interpretability. This approach combines a deep neural network architecture with techniques to improve model explainability to automatically build a domain-specific dictionary. As an illustrative use case of our approach, we create an objective dictionary that can infer conflict intensity from text data. We train the neural networks on a corpus of conflict reports and match them with conflict event data. This corpus consists of over 14,000 expert-written International Crisis Group (ICG) CrisisWatch reports between 2003 and 2021. Sensitivity analysis is used to extract the weighted words from the neural network to build the dictionary. To evaluate our approach, we compare our results to state-of-the-art deep learning language models, text-scaling methods, and standard, non-specialized, and conflict event dictionary approaches. We show that our approach outperforms these alternatives while retaining interpretability.
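    The dictionary-building step can be illustrated with a short gradient-based sensitivity analysis, sketched below in Python/PyTorch. The network, the tokenized input and the aggregation are simplified placeholders, not the paper's architecture or the ICG CrisisWatch corpus; the point is only that, once a model maps report text to conflict intensity, the gradient of its output with respect to each word's embedding yields a per-word weight that can be aggregated into a dictionary.

        # Hypothetical sketch: derive dictionary weights from a trained text
        # model via input-gradient sensitivity analysis.
        import torch
        import torch.nn as nn

        class IntensityNet(nn.Module):
            """Toy stand-in for the paper's network: embeddings -> pooled -> score."""
            def __init__(self, vocab_size: int, dim: int = 32):
                super().__init__()
                self.emb = nn.Embedding(vocab_size, dim)
                self.head = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

            def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
                e = self.emb(token_ids)              # (batch, seq, dim)
                return self.head(e.mean(dim=1))      # (batch, 1) intensity score

        def word_sensitivities(model: IntensityNet, token_ids: torch.Tensor) -> torch.Tensor:
            """L2 norm of d(score)/d(embedding) for each token in each report."""
            emb = model.emb(token_ids).detach().requires_grad_(True)
            model.head(emb.mean(dim=1)).sum().backward()
            return emb.grad.norm(dim=-1)             # (batch, seq)

        # Averaging these per-token sensitivities over the corpus, per vocabulary
        # entry, yields the weighted word list that forms the dictionary.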

Using boundary objects to make students brokers across disciplines: A dialogue between students and their lecturers on Bertolini’s node-place model

    Get PDF
    The competencies required for steering urban development sustainably are scattered across various disciplines. This is particularly relevant for planners working at the interface of different sub-disciplines, such as transport and land-use planning, exemplified by transit-oriented development (TOD). In this paper, we use Bertolini’s node-place model (NPM) as an example for TOD to test whether it enables interdisciplinary work to be undertaken in planning education. We tested our hypothesis in two design studios by challenging urban design students to develop their own design brief based on an NPM. The paper is of a dialogic, discursive nature: students discuss whether or not the NPM enables them to better understand the relationship between transit and urban development and to develop spatial strategies based upon an integrative approach. Our discussion reveals that the NPM cannot necessarily bridge disciplinary boundaries successfully. However, both lecturers and students see value in the model as a didactic instrument.

    Opening the black box – Quantile neural networks for loss given default prediction

    No full text
    We extend linear quantile regression with a neural network structure to enable more flexibility in every quantile of the bank loan loss given default distribution. This allows us to model interactions and non-linear impacts of any kind without specifying their exact form beforehand. The precision of the quantile forecasts increases by up to 30% compared to the benchmark, especially for the higher quantiles that matter most in credit risk. Using a novel feature importance measure, we calculate the strength, direction, interactions and other non-linear impacts for every conditional quantile and every variable. This enables us to explain why our extension exhibits superior performance over the benchmark. Moreover, we find that the macroeconomy is up to twice as important in the US as in Europe and has large joint impacts in both regions. While the macroeconomy is the most important driver in the US, collateralization is essential in Europe.
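    The core of the extension is a network trained with the pinball (check) loss in place of a linear quantile specification. The sketch below, in Python/PyTorch, shows the loss and a small network for a single quantile; the architecture, the feature dimension and the 95% quantile are illustrative assumptions rather than the paper's exact setup.

        # Minimal sketch: neural network for one conditional LGD quantile,
        # trained with the pinball (check) loss.
        import torch
        import torch.nn as nn

        def pinball_loss(pred: torch.Tensor, target: torch.Tensor, tau: float) -> torch.Tensor:
            """Check loss rho_tau(u) = u * (tau - 1{u < 0}), averaged over the batch."""
            u = target - pred
            return torch.mean(torch.maximum(tau * u, (tau - 1.0) * u))

        tau = 0.95                                   # high quantile, most relevant in credit risk
        net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))  # toy feature dim 10
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)

        X = torch.randn(256, 10)                     # placeholder loan features
        y = torch.rand(256, 1)                       # placeholder realized LGDs in [0, 1]

        for _ in range(200):                         # toy training loop
            opt.zero_grad()
            loss = pinball_loss(net(X), y, tau)
            loss.backward()
            opt.step()

    Fitting one such network per quantile (or one network with several quantile outputs) gives the flexible conditional quantiles described above; variable importance per quantile can then be read off by perturbing or differentiating the trained network with respect to each input.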

    Parameter estimation, bias correction and uncertainty quantification in the Vasicek credit portfolio model

    No full text
    This paper is devoted to the parameterization of correlations in the Vasicek credit portfolio model. First, we analytically approximate standard errors for value-at-risk and expected shortfall based on the standard errors of intra-cohort correlations. Second, we introduce a novel copula-based maximum likelihood estimator for inter-cohort correlations and derive an analytical expression for its standard errors. Our new approach enhances current methods in terms of both computing time and, most importantly, direct uncertainty quantification. Both contributions can be used to quantify a margin of conservatism, as required by regulators. Third, we illustrate powerful procedures that reduce the well-known bias of current estimators and demonstrate their favorable properties. Further, an open-source implementation of all estimators is provided in the new R package AssetCorr, and selected estimators are applied to Moody’s Default & Recovery Database.
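    For background, the intra-cohort part can be illustrated with the asymptotic maximum likelihood estimator based on the large-portfolio Vasicek density, sketched below in Python/SciPy. This is a textbook version of that estimator, not the implementation in AssetCorr; function names, starting values and bounds are assumptions, and the appended value-at-risk formula is the standard Vasicek quantile.

        # Illustrative sketch: asymptotic MLE of (PD, rho) in the one-factor
        # Vasicek model from a time series of observed portfolio default rates.
        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        def vasicek_neg_loglik(params, default_rates):
            """Negative log-likelihood under the large-portfolio Vasicek density."""
            pd_, rho = params
            x = norm.ppf(default_rates)
            ll = (0.5 * np.log((1 - rho) / rho)
                  + 0.5 * x**2
                  - (np.sqrt(1 - rho) * x - norm.ppf(pd_))**2 / (2 * rho))
            return -np.sum(ll)

        def fit_vasicek(default_rates):
            """Fit (PD, rho) by maximum likelihood; starting values are arbitrary."""
            res = minimize(vasicek_neg_loglik, x0=(0.02, 0.10),
                           args=(np.asarray(default_rates),),
                           bounds=[(1e-6, 0.5), (1e-6, 0.99)])
            return res.x  # estimated (PD, rho)

        def vasicek_var(pd_, rho, alpha=0.999):
            """Vasicek quantile of the portfolio default-rate distribution at level alpha."""
            return norm.cdf((norm.ppf(pd_) + np.sqrt(rho) * norm.ppf(alpha)) / np.sqrt(1 - rho))

    Propagating the standard error of the estimated rho through such risk measures is what yields analytical uncertainty bands for value-at-risk and expected shortfall, and hence a quantified margin of conservatism.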

    Deep calibration of financial models: turning theory into practice

    Get PDF
    The calibration of financial models is laborious, time-consuming and expensive, and needs to be performed frequently by financial institutions. Recently, the application of artificial neural networks (ANNs) for model calibration has gained interest. This paper provides the first comprehensive empirical study on the application of ANNs for calibration based on observed market data. We benchmark the performance of the ANN approach against a real-life calibration framework in production at a large financial institution. The ANN-based calibration framework achieves competitive calibration results while being roughly four times faster and requiring less computational effort. Besides speed and efficiency, the resulting model parameters are found to be more stable over time, enabling more reliable risk reports and business decisions. Furthermore, the calibration framework involves multiple validation steps to counteract regulatory concerns regarding its practical application.
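    A common way to set up such a deep calibration is to learn the inverse map from observable market quotes to model parameters on synthetically generated data; the sketch below (Python/PyTorch) illustrates that idea. The pricing function, parameter box and quote grid are placeholders, and the paper's actual framework, market data and validation steps are not reproduced here.

        # Illustrative sketch of deep calibration: train an ANN to map a vector
        # of market quotes (e.g. an implied-volatility grid) to model parameters,
        # using synthetic data generated from a placeholder pricing function.
        import torch
        import torch.nn as nn

        N_QUOTES, N_PARAMS = 40, 3                   # e.g. 40 grid points, 3 parameters

        def price_surface(theta: torch.Tensor) -> torch.Tensor:
            """Placeholder pricing map theta -> quotes with shape (batch, N_QUOTES)."""
            grid = torch.linspace(0.1, 2.0, N_QUOTES)
            return theta[:, :1] + theta[:, 1:2] * grid + theta[:, 2:3] * grid.sqrt()

        calibrator = nn.Sequential(nn.Linear(N_QUOTES, 128), nn.ReLU(),
                                   nn.Linear(128, 128), nn.ReLU(),
                                   nn.Linear(128, N_PARAMS))
        opt = torch.optim.Adam(calibrator.parameters(), lr=1e-3)

        for _ in range(1000):                        # offline training on synthetic samples
            theta = torch.rand(512, N_PARAMS)        # parameters drawn from their admissible box
            quotes = price_surface(theta)
            loss = nn.functional.mse_loss(calibrator(quotes), theta)
            opt.zero_grad()
            loss.backward()
            opt.step()

        # At run time, calibration collapses to a single forward pass per market
        # snapshot, which is where the reported speed-up comes from.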